the application of heuristics and randomised search to overcome the soft limits of computation, including the limitations of these methods
hill climbing on heuristic functions, the A* algorithm and the simulated annealing algorithm
Key skills
apply heuristic methods to design algorithms to solve computationally hard problems
explain the application of heuristics and randomised search approaches to intractable problems, including the graph colouring, 0-1 knapsack and travelling salesman problems
Hill Climbing
An algorithmic technique that relies solely on local heuristics.
At each step, it looks at one neighbour (sometimes chosen at random).
If that neighbour has a better heuristic value (e.g. closer to the goal), move there.
If not, stay put.
Repeat until the goal is reached or you get stuck.
Hill Climbing
current = start
while current != goal:
    neighbour = select_random_neighbour(current)
    if h(neighbour) < h(current):
        current = neighbour
return current
Hill Climbing
Imagine finding the highest peak in a hilly terrain by always moving uphill.
Hill climbing is a metaphor; any measure of solution quality can serve as the heuristic.
Examples:
Move through a grid to some goal
Find the value of \(x\) that maximises the function \(f(x) = -x^2 + 5x\)
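A minimal sketch of this second example in Python, assuming a fixed step size and a stochastic neighbour choice (the step size, iteration count and starting point are illustrative, not part of the example):

import random

def f(x):
    # Function to maximise; its true maximum is at x = 2.5
    return -x**2 + 5*x

def hill_climb(x, step=0.1, max_iters=1000):
    # Stochastic hill climbing: try one random neighbour per step,
    # move there only if it improves f, otherwise stay put.
    for _ in range(max_iters):
        neighbour = x + random.choice([-step, step])
        if f(neighbour) > f(x):
            x = neighbour
    return x

print(hill_climb(x=0.0))   # ends near x = 2.5, the true maximum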
Hill Climbing in Search
Hill Climbing can be used to search through a solution space
Neighbours are similar solutions, typically differing by a small change.
Examples of Hill Climbing
Exam Timetabling
Problem: Reduce clashes between students.
Heuristic: Number of clashes in the timetable.
Rule: Swap two exams if it reduces clashes.
Travelling Salesman Problem (TSP)
Problem: Find a short tour through all cities.
Heuristic: Total length of the tour.
Rule: Swap two cities if it produces a shorter tour.
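A hedged Python sketch of that swap rule, assuming cities are (x, y) coordinate pairs and a tour is a list of city indices (the helper names and iteration count are illustrative):

import math
import random

def tour_length(tour, cities):
    # Heuristic to minimise: total length of the closed tour.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def hill_climb_tsp(cities, iters=10000):
    tour = list(range(len(cities)))
    random.shuffle(tour)                      # random starting tour
    for _ in range(iters):
        i, j = random.sample(range(len(tour)), 2)
        candidate = tour[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]   # swap two cities
        if tour_length(candidate, cities) < tour_length(tour, cities):
            tour = candidate                  # keep the swap only if the tour got shorter
    return tour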
Knapsack Approximation
Problem: Maximise value of items under weight capacity.
Heuristic: Total value/weight ratio of the current selection.
Rule: Add an item if it increases the ratio; otherwise, don’t.
Sudoku Solver
Problem: Fill a 9×9 Sudoku grid.
Heuristic: Number of rule violations (duplicate numbers in rows, columns, boxes).
Rule: Change a cell only if it reduces violations.
Key Idea of Hill Climbing
Start with a current solution.
Measure it with a heuristic.
Make a small local change that improves the heuristic.
Stop when no further improvement is possible.
Problems
Can get stuck in local optima.
Can get trapped in plateaus (flat areas in the landscape).
Outcome depends on initial conditions.
Requires a good heuristic to guide the search.
Fixes
Use random restarts: If stuck, restart from a different initial solution.
Repeat \(k\) times and pick the best result.
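A sketch of random restarts in Python, where random_start, hill_climb and the heuristic h are placeholders for whatever problem is being solved (all three names are assumptions, following the convention above that a lower h is better):

def random_restart(k, random_start, hill_climb, h):
    # Run hill climbing from k different random starting solutions
    # and keep the result with the best (lowest) heuristic value.
    best = None
    for _ in range(k):
        result = hill_climb(random_start())
        if best is None or h(result) < h(best):
            best = result
    return best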
Combine with Other Heuristics (Metaheuristics)
Use multiple heuristics or higher-level strategies to guide the search.
Example: simulated annealing
Variations of Hill Climbing
Steepest-Ascent: check all neighbours, move to the best one
Stochastic: pick a random neighbour, move if it improves
First-Choice: scan neighbours in given order, stop at the first improvement
All are greedy: only move if the heuristic improves
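For contrast with the stochastic pseudocode earlier, here is a sketch of the steepest-ascent variant, assuming a neighbours(solution) function and a heuristic h where lower values are better (both names are illustrative):

def steepest_ascent(current, neighbours, h):
    # Examine every neighbour each step and move to the best one;
    # stop as soon as no neighbour improves on the current solution.
    while True:
        best = min(neighbours(current), key=h, default=current)
        if h(best) < h(current):
            current = best
        else:
            return current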
“Sometimes you have to take two steps back to take ten forward.”
Nipsey Hussle
Simulated Annealing
Adds an element of randomness
Sometimes accepts a worse solution (probabilistically)
Helps the search escape local maxima and explore further
Simulated Annealing
Annealing: heat-treating metal to remove defects
The metaphor
Imagine cooling a piece of metal.
At high temperature, the atoms move freely and can “jump” to many positions.
As the metal cools, atoms settle into a stable crystal structure (low-energy state).
SA borrows this idea:
at first, the algorithm explores widely (even bad moves)
but as the “temperature” decreases it becomes more conservative.
Algorithm steps
Start with an initial solution.
At each step:
Pick a neighbour solution.
If it’s better → move there.
If it’s worse → maybe still move there with probability:
\[
P = e^{-\Delta E / T}
\]
where \(\Delta E = E(\text{new}) - E(\text{current})\) measures how much worse the new solution is, and \(T\) is the current “temperature”.
Gradually decrease temperature.
Stop when the system is “frozen” (the temperature has become very low).
Algorithm
Simulated Annealing
current ← initial solution
T ← initial temperature
while system not frozen do
    pick a random neighbour
    ΔE ← E(neighbour) – E(current)
    if ΔE < 0 then
        current ← neighbour
    else
        P ← exp(–ΔE/T)
        r ← random number in [0,1]
        if r < P then
            current ← neighbour
        end if
    end if
    decrease T (T = T * cooling_rate)
end while
return current
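The same loop written as runnable Python, assuming the caller supplies an energy function E to minimise and a random_neighbour function; the default temperature, cooling rate and freezing threshold are illustrative choices:

import math
import random

def simulated_annealing(initial, E, random_neighbour,
                        T=1.0, cooling_rate=0.995, T_min=1e-3):
    # Always accept improving moves; accept worsening moves with
    # probability exp(-ΔE / T), which shrinks as T cools.
    current = initial
    while T > T_min:                          # treat a tiny T as "frozen"
        neighbour = random_neighbour(current)
        delta_E = E(neighbour) - E(current)
        if delta_E < 0 or random.random() < math.exp(-delta_E / T):
            current = neighbour
        T *= cooling_rate                     # geometric cooling schedule
    return current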
Why it works
Hill Climbing: only accepts better moves → gets stuck in local optima.
Simulated Annealing: sometimes accepts worse moves, especially at the start (high T).
As T lowers, it becomes more like hill climbing (only better moves).
This balance lets it escape local optima and explore more of the solution space.
Example — Travelling Salesman Problem (TSP)
Heuristic E(tour) = total distance of the tour.
Lower distance = better tour.
Neighbour solution = swap two cities in the order.
Start with a random tour of cities.
Swap two cities:
If distance decreases → accept.
If distance increases → accept with probability depending on T.
Early on: lots of exploration.
Later: fine-tuning near the best tours.
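Putting the pieces together, a usage sketch that feeds the TSP heuristic and swap move from the earlier hill climbing sketch into the simulated_annealing function above (it reuses those assumed names and imports, and the city coordinates are a toy instance):

def tsp_neighbour(tour):
    # Neighbour move: swap two cities in the visiting order.
    i, j = random.sample(range(len(tour)), 2)
    new_tour = tour[:]
    new_tour[i], new_tour[j] = new_tour[j], new_tour[i]
    return new_tour

cities = [(0, 0), (3, 1), (6, 0), (4, 4), (1, 5)]     # toy instance
start = list(range(len(cities)))
best_tour = simulated_annealing(start,
                                E=lambda t: tour_length(t, cities),
                                random_neighbour=tsp_neighbour)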
Probability and Temperature
The ratio \(\Delta E / T\) determines the acceptance probability of worse solutions.
Temperature \(T\) is decreased at each iteration, for example by multiplying it by a cooling rate just below 1.
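A small numeric illustration of how the acceptance probability \(P = e^{-\Delta E / T}\) shrinks as the temperature falls, for a move that is 10 units worse (the numbers are purely illustrative):

import math

delta_E = 10                        # a move that is 10 units worse
for T in (100, 10, 1, 0.1):
    P = math.exp(-delta_E / T)      # probability of accepting this worse move
    print(f"T = {T:>5}: P = {P:.2e}")
# P falls from about 0.90 at T = 100 to about 0.37 at T = 10,
# and is essentially 0 once T reaches 1.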